In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
In this notebook, you will take the first steps toward developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that the human most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
Make sure that you've downloaded the required human and dog datasets:
Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.
Download the human dataset. Unzip the folder and place it in the home directory, at location /lfw.
Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.
import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
There are 18982 total human images.
There are 8351 total dog images.
In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
#!pip install opencv-python
#!pip install cv2-tools
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[1])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x, y, w, h) in faces:
    # add bounding box to color image
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 1
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.
In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
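The detectMultiScale call above uses the default settings. If the detector produces too many false positives or misses small faces, it exposes a few standard tuning parameters; the values below are illustrative, not the settings used in this project:
# optional tuning of the cascade (illustrative values)
faces = face_cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,    # how much the image is shrunk at each pyramid step
    minNeighbors=5,     # higher values -> fewer detections, fewer false positives
    minSize=(30, 30))   # ignore candidate faces smaller than 30x30 pixels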
We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
Question 1: Use the code cell below to test the performance of the face_detector function.
What percentage of the first 100 images in human_files have a detected human face? What percentage of the first 100 images in dog_files have a detected human face? Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.
Answer: 95% of the images in human_files_short have a detected human face, and 18% of the images in dog_files_short do (results printed below).
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
count = 0
for i in human_files_short:
    if face_detector(i):
        count += 1
print(count / len(human_files_short) * 100)
count = 0
for i in dog_files_short:
    if face_detector(i):
        count += 1
print(count / len(dog_files_short) * 100)
95.0
18.0
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
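One possible deep-learning alternative is the MTCNN face detector from the third-party facenet-pytorch package. The sketch below assumes that package is installed (pip install facenet-pytorch) and was not run as part of this submission:
#!pip install facenet-pytorch
import torch
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(device='cuda' if torch.cuda.is_available() else 'cpu')

def face_detector_mtcnn(img_path):
    # MTCNN expects an RGB PIL image; detect() returns (boxes, probabilities)
    img = Image.open(img_path).convert('RGB')
    boxes, probs = mtcnn.detect(img)
    return boxes is not None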
In this section, we use a pre-trained model to detect dogs in images.
The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()
Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.
VGG16
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
from PIL import Image
import torchvision.transforms as transforms
# Set PIL to be tolerant of image files that are truncated.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to
    predicted ImageNet class for image at specified path

    Args:
        img_path: path to an image

    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    ## TODO: Complete the function.
    ## Load and pre-process an image from the given img_path
    ## Return the *index* of the predicted class for that image
    # data_transform = transforms.Compose([transforms.RandomResizedCrop(256), transforms.ToTensor()])
    data_transform = transforms.Compose([transforms.Resize(224),
                                         transforms.CenterCrop(224),
                                         transforms.ToTensor(),
                                         transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                              std=[0.229, 0.224, 0.225])])
    image = Image.open(img_path).convert('RGB')  # guard against grayscale/RGBA files
    tensor = data_transform(image)
    tensor = tensor.unsqueeze(0)  # add a batch dimension: (1, 3, 224, 224)
    if use_cuda:
        tensor = tensor.cuda()
    VGG16.eval()  # disable dropout so predictions are deterministic
    result = VGG16(tensor)
    return int(torch.argmax(result))  # predicted class index, an integer in [0, 999]
teste = VGG16_predict('dogImages/train/012.Australian_shepherd/Australian_shepherd_00832.jpg')
teste
231
While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, covering all categories from 'Chihuahua' to 'Mexican hairless'. Thus, to check whether an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check whether the predicted index falls between 151 and 268 (inclusive).
Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    ## TODO: Complete the function.
    # ImageNet class indices 151-268 (inclusive) are dog breeds
    return 151 <= VGG16_predict(img_path) <= 268
# true/false
teste = dog_detector('dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg')
teste
True
Question 2: Use the code cell below to test the performance of your dog_detector function.
What percentage of the images in human_files_short have a detected dog? What percentage of the images in dog_files_short have a detected dog?
Answer: 1% of the images in human_files_short have a detected dog, and 100% of the images in dog_files_short do (results printed below).
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
count_dog = 0
for i in human_files_short:
    if dog_detector(i):
        count_dog += 1
print(count_dog / len(human_files_short))
count_dog = 0
for i in dog_files_short:
    if dog_detector(i):
        count_dog += 1
print(count_dog / len(dog_files_short))
0.01
1.0
We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
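As a sketch of this optional task (not run for this submission), torchvision's pretrained ResNet-50 could stand in for VGG-16 with the same ImageNet preprocessing; the dog index range 151-268 applies unchanged because both models share the ImageNet class list:
### (Optional) sketch: ResNet-50 as an alternative dog detector
resnet50 = models.resnet50(pretrained=True)
resnet50.eval()
if use_cuda:
    resnet50 = resnet50.cuda()

imagenet_transform = transforms.Compose([transforms.Resize(224),
                                         transforms.CenterCrop(224),
                                         transforms.ToTensor(),
                                         transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                              std=[0.229, 0.224, 0.225])])

def dog_detector_resnet50(img_path):
    image = Image.open(img_path).convert('RGB')
    tensor = imagenet_transform(image).unsqueeze(0)
    if use_cuda:
        tensor = tensor.cuda()
    idx = int(torch.argmax(resnet50(tensor)))
    return 151 <= idx <= 268  # dog classes occupy the same index range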
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
| Brittany | Welsh Springer Spaniel |
|---|---|
| *(image)* | *(image)* |
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
| Curly-Coated Retriever | American Water Spaniel |
|---|---|
| *(image)* | *(image)* |
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
| Yellow Labrador | Chocolate Labrador | Black Labrador |
|---|---|---|
| *(image)* | *(image)* | *(image)* |
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
import os
from torchvision import datasets
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
transform_train = transforms.Compose([transforms.Resize(224),
                                      transforms.CenterCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.RandomRotation(20),
                                      transforms.ToTensor(),
                                      transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                           std=[0.229, 0.224, 0.225])])
transform = transforms.Compose([transforms.Resize(224),
                                transforms.CenterCrop(224),
                                transforms.ToTensor(),
                                transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                     std=[0.229, 0.224, 0.225])])
dataset = {}
loader_scratch = {}
# augmentation only on the training set; validation and test use the deterministic transform
dataset['train'] = datasets.ImageFolder(os.path.join('dogImages/train'), transform_train)
dataset['test'] = datasets.ImageFolder(os.path.join('dogImages/test'), transform)
dataset['valid'] = datasets.ImageFolder(os.path.join('dogImages/valid'), transform)
loader_scratch['train'] = torch.utils.data.DataLoader(dataset['train'], batch_size=32, shuffle=True, num_workers=2)
loader_scratch['test'] = torch.utils.data.DataLoader(dataset['test'], batch_size=16)
loader_scratch['valid'] = torch.utils.data.DataLoader(dataset['valid'], batch_size=16)
print(len(dataset['train']))
print(len(dataset['test']))
print(len(dataset['valid']))
print(len(dataset['test'].classes))
6680
836
835
133
inputs, classes = next(iter(loader_scratch['train']))
inputs[0].shape
torch.Size([3, 224, 224])
classes
tensor([ 6, 103, 74, 46, 108, 77, 99, 75, 16, 61, 23, 97, 31, 124,
10, 121, 43, 9, 75, 107, 111, 102, 73, 37, 20, 112, 131, 101,
88, 116, 44, 14])
from torchvision import utils
def visualize_sample_images(inp):
    # undo the ImageNet normalization, then clip to [0, 1] for display
    inp = inp.numpy().transpose((1, 2, 0))
    inp = inp * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
    inp = np.clip(inp, 0, 1)
    fig = plt.figure(figsize=(60, 25))
    plt.axis('off')
    plt.imshow(inp)
    plt.pause(0.001)
# Get a batch of training data.
inputs, classes = next(iter(loader_scratch['train']))
# Convert the batch to a grid.
grid = utils.make_grid(inputs, nrow=5)
# Display!
visualize_sample_images(grid)
Question 3: Describe your chosen procedure for preprocessing the data.
How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
Answer: I resize each image and then, for the training set, apply a random rotation of up to 20 degrees and a random horizontal flip, which gives good variability relative to the kinds of images we might encounter outside the training data. Although I saw some examples online using RandomResizedCrop, I could not understand the reason for using it. I went with 224 pixels because it is a good starting point: we do not lose too much information from the pictures we are training on, and we can adapt to outside images more easily. I added CenterCrop later because not all the tensors came out the same size, and I discovered they all need to match.
Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
Answer: I decided to use only a horizontal flip and a rotation of up to 20 degrees. I think that covers most of the images we are going to work with.
torch.cuda.empty_cache()
(IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.
import torch.nn as nn
import torch.nn.functional as F

# define the CNN architecture
class Net(nn.Module):
    ### TODO: choose an architecture, and complete the class
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(64, 128, 3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(128, 256, 3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(256, 512, 3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(512, 512, 3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(25088, 512)  # 512 channels * 7 * 7 after five pools of a 224-pixel input
        self.fc2 = nn.Linear(512, 512)
        self.fc3 = nn.Linear(512, 133)    # one output per dog breed class
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        ## Define forward behavior
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.pool(F.relu(self.conv4(x)))
        x = self.pool(F.relu(self.conv5(x)))
        x = x.view(-1, 25088)  # flatten to (batch, 25088)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x
#-#-# You do NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()
# trying to debug why the dimensions were not working
conv1 = nn.Conv2d(3, 32, 2, stride=2, padding=0)   # 112 pixels after conv, 56 after pool
# 56 x 56 x 32
conv2 = nn.Conv2d(32, 64, 2, stride=2, padding=0)  # 28 pixels after conv, 14 after pool
# 14 x 14 x 64
pool = nn.MaxPool2d(2, 2)
fc1 = nn.Linear(14 * 14 * 64, 6000)
fc2 = nn.Linear(6000, 500)
fc3 = nn.Linear(500, 133)  # 133 outputs since there are 133 classes
tensor_example = inputs[0]
print(tensor_example.size())
# conv2d expects tensor dimensions (n_samples, channels, height, width), e.g. (1000, 1, 224, 224),
# so after further debugging, unsqueeze is used to add a batch dimension of 1
tensor_example = tensor_example.unsqueeze(0)
print(tensor_example.size())
tensor_example = conv1(tensor_example)
print(tensor_example.size())
tensor_example = pool(tensor_example)
print(tensor_example.size())
tensor_example = conv2(tensor_example)
print(tensor_example.size())
tensor_example = pool(tensor_example)
print(tensor_example.size())
tensor_example = tensor_example.view(-1, 64 * 14 * 14)
print(tensor_example.size())
tensor_example = fc1(tensor_example)
print(tensor_example.size())
tensor_example = fc2(tensor_example)
print(tensor_example.size())
tensor_example = fc3(tensor_example)
print(tensor_example.size())
tensor_example = tensor_example.squeeze()
print(tensor_example.size())
print(tensor_example)
torch.Size([3, 224, 224])
torch.Size([1, 3, 224, 224])
torch.Size([1, 32, 112, 112])
torch.Size([1, 32, 56, 56])
torch.Size([1, 64, 28, 28])
torch.Size([1, 64, 14, 14])
torch.Size([1, 12544])
torch.Size([1, 6000])
torch.Size([1, 500])
torch.Size([1, 133])
torch.Size([133])
tensor([-0.2461, 0.0105, -0.1061, 0.0987, -0.1113, 0.0884, 0.0187, -0.1838,
-0.0789, -0.0333, 0.0574, 0.0399, -0.0104, -0.1320, -0.1030, -0.0041,
0.0073, 0.0269, -0.2221, -0.1248, -0.0450, 0.1023, -0.0495, -0.0451,
-0.1712, 0.1907, -0.0465, -0.1432, -0.1048, 0.1256, -0.0241, -0.0539,
-0.0243, 0.0406, 0.1276, 0.0239, -0.0601, -0.1146, -0.0064, -0.2978,
0.0078, -0.0487, -0.0483, 0.0543, 0.0815, 0.0349, 0.0690, -0.1955,
-0.0337, 0.1523, 0.1824, 0.0533, -0.0190, -0.1032, 0.0576, -0.0185,
-0.0743, -0.1310, -0.0599, 0.0141, 0.0977, -0.0884, -0.0634, -0.1948,
-0.0754, 0.0786, 0.0241, 0.0746, -0.0654, -0.1168, -0.0102, -0.0600,
0.0216, 0.0460, 0.0357, -0.0063, -0.0271, -0.1156, -0.0743, 0.0823,
0.1462, 0.2088, -0.0401, 0.0439, 0.1173, -0.0048, -0.0155, 0.0883,
0.0759, 0.0595, -0.0444, 0.1688, 0.0373, -0.1232, -0.0604, -0.2350,
-0.0723, -0.0989, -0.2364, 0.1597, -0.0617, 0.0247, -0.0835, 0.0356,
-0.0357, 0.0582, -0.0592, 0.1415, 0.0691, -0.0106, 0.0669, -0.0368,
-0.1525, 0.1155, -0.0034, -0.0877, -0.0355, -0.2069, -0.1248, 0.0347,
0.0136, 0.0372, 0.1138, -0.0431, -0.1340, -0.0472, 0.0704, 0.0700,
0.1103, 0.0074, -0.1603, 0.0585, -0.1485],
grad_fn=<SqueezeBackward0>)
Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
Answer:
I first started with the CNN model from the example in the PyTorch documentation and gave it some tests. After running 2 epochs the loss was always 0.00. My first guess was that the feature sizes were not matching between the layers, so I tried adjusting the layers using the formula:
Feature size = ((Image size + 2 * Padding size − Kernel size) / Stride) + 1
After much struggling with the formula, I settled on the model above, which fits a 224-pixel input.
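As a quick sanity check of that formula (a small sketch, not part of the original submission), applying it to the five conv (kernel 3, stride 1, padding 1) plus max-pool (kernel 2, stride 2) stages of the model above recovers the 25088 input features expected by fc1:
def conv_out(size, kernel, stride=1, padding=0):
    # feature size = ((image size + 2 * padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

size = 224
for _ in range(5):                  # five conv + pool stages
    size = conv_out(size, 3, 1, 1)  # 3x3 conv with padding 1 keeps the spatial size
    size = conv_out(size, 2, 2)     # 2x2 max pool halves it
print(size, 512 * size * size)      # prints: 7 25088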
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.
import torch.optim as optim
### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()
### TODO: select optimizer
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.01)
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.
# the following import is required for training to be robust to truncated images
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf
    for epoch in range(1, n_epochs + 1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0
        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            # clear gradients
            optimizer.zero_grad()
            # forward pass
            output = model(data)
            # calculate loss
            loss = criterion(output, target)
            # compute gradients
            loss.backward()
            # update parameters
            optimizer.step()
            # accumulate the training loss
            train_loss += loss.item() * data.size(0)
        ######################
        # validate the model #
        ######################
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            # forward pass
            output = model(data)
            # calculate loss
            loss = criterion(output, target)
            # accumulate the validation loss
            valid_loss += loss.item() * data.size(0)
        train_loss = train_loss / len(loaders['train'].dataset)
        valid_loss = valid_loss / len(loaders['valid'].dataset)
        # print training/validation statistics
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch,
            train_loss,
            valid_loss
        ))
        ## TODO: save the model if validation loss has decreased
        if valid_loss < valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
                valid_loss_min,
                valid_loss))
            torch.save(model.state_dict(), save_path)
            valid_loss_min = valid_loss
    # return trained model
    return model
# train the model
model_scratch = train(100, loader_scratch, model_scratch, optimizer_scratch,
                      criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
Epoch: 1 Training Loss: 4.889392 Validation Loss: 4.888515 Validation loss decreased (inf --> 4.888515). Saving model ... Epoch: 2 Training Loss: 4.887668 Validation Loss: 4.887023 Validation loss decreased (4.888515 --> 4.887023). Saving model ... Epoch: 3 Training Loss: 4.885967 Validation Loss: 4.885342 Validation loss decreased (4.887023 --> 4.885342). Saving model ... Epoch: 4 Training Loss: 4.883670 Validation Loss: 4.883016 Validation loss decreased (4.885342 --> 4.883016). Saving model ... Epoch: 5 Training Loss: 4.880407 Validation Loss: 4.878689 Validation loss decreased (4.883016 --> 4.878689). Saving model ... Epoch: 6 Training Loss: 4.874915 Validation Loss: 4.871521 Validation loss decreased (4.878689 --> 4.871521). Saving model ... Epoch: 7 Training Loss: 4.867584 Validation Loss: 4.865218 Validation loss decreased (4.871521 --> 4.865218). Saving model ... Epoch: 8 Training Loss: 4.865508 Validation Loss: 4.861340 Validation loss decreased (4.865218 --> 4.861340). Saving model ... Epoch: 9 Training Loss: 4.860388 Validation Loss: 4.852155 Validation loss decreased (4.861340 --> 4.852155). Saving model ... Epoch: 10 Training Loss: 4.841278 Validation Loss: 4.816055 Validation loss decreased (4.852155 --> 4.816055). Saving model ... Epoch: 11 Training Loss: 4.794293 Validation Loss: 4.749731 Validation loss decreased (4.816055 --> 4.749731). Saving model ... Epoch: 12 Training Loss: 4.730763 Validation Loss: 4.699506 Validation loss decreased (4.749731 --> 4.699506). Saving model ... Epoch: 13 Training Loss: 4.664679 Validation Loss: 4.637422 Validation loss decreased (4.699506 --> 4.637422). Saving model ... Epoch: 14 Training Loss: 4.612943 Validation Loss: 4.632593 Validation loss decreased (4.637422 --> 4.632593). Saving model ... Epoch: 15 Training Loss: 4.580469 Validation Loss: 4.548622 Validation loss decreased (4.632593 --> 4.548622). Saving model ... Epoch: 16 Training Loss: 4.546736 Validation Loss: 4.539577 Validation loss decreased (4.548622 --> 4.539577). Saving model ... Epoch: 17 Training Loss: 4.512474 Validation Loss: 4.550806 Epoch: 18 Training Loss: 4.475593 Validation Loss: 4.507340 Validation loss decreased (4.539577 --> 4.507340). Saving model ... Epoch: 19 Training Loss: 4.451274 Validation Loss: 4.494305 Validation loss decreased (4.507340 --> 4.494305). Saving model ... Epoch: 20 Training Loss: 4.412459 Validation Loss: 4.469170 Validation loss decreased (4.494305 --> 4.469170). Saving model ... Epoch: 21 Training Loss: 4.385820 Validation Loss: 4.463912 Validation loss decreased (4.469170 --> 4.463912). Saving model ... Epoch: 22 Training Loss: 4.356483 Validation Loss: 4.462934 Validation loss decreased (4.463912 --> 4.462934). Saving model ... Epoch: 23 Training Loss: 4.318851 Validation Loss: 4.425667 Validation loss decreased (4.462934 --> 4.425667). Saving model ... Epoch: 24 Training Loss: 4.287295 Validation Loss: 4.392981 Validation loss decreased (4.425667 --> 4.392981). Saving model ... Epoch: 25 Training Loss: 4.258789 Validation Loss: 4.360056 Validation loss decreased (4.392981 --> 4.360056). Saving model ... Epoch: 26 Training Loss: 4.212999 Validation Loss: 4.377210 Epoch: 27 Training Loss: 4.184479 Validation Loss: 4.375341 Epoch: 28 Training Loss: 4.156984 Validation Loss: 4.301292 Validation loss decreased (4.360056 --> 4.301292). Saving model ... Epoch: 29 Training Loss: 4.114704 Validation Loss: 4.239886 Validation loss decreased (4.301292 --> 4.239886). Saving model ... 
Epoch: 30 Training Loss: 4.089867 Validation Loss: 4.243351 Epoch: 31 Training Loss: 4.047095 Validation Loss: 4.266069 Epoch: 32 Training Loss: 4.019708 Validation Loss: 4.190802 Validation loss decreased (4.239886 --> 4.190802). Saving model ... Epoch: 33 Training Loss: 3.992199 Validation Loss: 4.241649 Epoch: 34 Training Loss: 3.956684 Validation Loss: 4.218399 Epoch: 35 Training Loss: 3.914070 Validation Loss: 4.212701 Epoch: 36 Training Loss: 3.874903 Validation Loss: 4.237349 Epoch: 37 Training Loss: 3.839453 Validation Loss: 4.090529 Validation loss decreased (4.190802 --> 4.090529). Saving model ... Epoch: 38 Training Loss: 3.807030 Validation Loss: 4.062075 Validation loss decreased (4.090529 --> 4.062075). Saving model ... Epoch: 39 Training Loss: 3.775096 Validation Loss: 4.068092 Epoch: 40 Training Loss: 3.721448 Validation Loss: 4.032248 Validation loss decreased (4.062075 --> 4.032248). Saving model ... Epoch: 41 Training Loss: 3.672801 Validation Loss: 4.063045 Epoch: 42 Training Loss: 3.620872 Validation Loss: 4.099548 Epoch: 43 Training Loss: 3.567962 Validation Loss: 4.006018 Validation loss decreased (4.032248 --> 4.006018). Saving model ... Epoch: 44 Training Loss: 3.503641 Validation Loss: 3.918586 Validation loss decreased (4.006018 --> 3.918586). Saving model ... Epoch: 45 Training Loss: 3.472104 Validation Loss: 3.974407 Epoch: 46 Training Loss: 3.416370 Validation Loss: 3.937512 Epoch: 47 Training Loss: 3.332091 Validation Loss: 3.916004 Validation loss decreased (3.918586 --> 3.916004). Saving model ... Epoch: 48 Training Loss: 3.291141 Validation Loss: 3.902017 Validation loss decreased (3.916004 --> 3.902017). Saving model ... Epoch: 49 Training Loss: 3.205323 Validation Loss: 3.861680 Validation loss decreased (3.902017 --> 3.861680). Saving model ... Epoch: 50 Training Loss: 3.140106 Validation Loss: 3.868181 Epoch: 51 Training Loss: 3.068250 Validation Loss: 3.861856 Epoch: 52 Training Loss: 3.013263 Validation Loss: 3.891488 Epoch: 53 Training Loss: 2.935923 Validation Loss: 3.833490 Validation loss decreased (3.861680 --> 3.833490). Saving model ... Epoch: 54 Training Loss: 2.861135 Validation Loss: 3.921787 Epoch: 55 Training Loss: 2.773815 Validation Loss: 3.848594 Epoch: 56 Training Loss: 2.704450 Validation Loss: 3.905088 Epoch: 57 Training Loss: 2.636155 Validation Loss: 3.950334 Epoch: 58 Training Loss: 2.556215 Validation Loss: 3.819920 Validation loss decreased (3.833490 --> 3.819920). Saving model ... 
Epoch: 59 Training Loss: 2.472448 Validation Loss: 3.968770 Epoch: 60 Training Loss: 2.373638 Validation Loss: 3.982664 Epoch: 61 Training Loss: 2.270091 Validation Loss: 3.957459 Epoch: 62 Training Loss: 2.194291 Validation Loss: 4.004487 Epoch: 63 Training Loss: 2.128164 Validation Loss: 3.972860 Epoch: 64 Training Loss: 2.034451 Validation Loss: 4.347200 Epoch: 65 Training Loss: 1.930092 Validation Loss: 4.110896 Epoch: 66 Training Loss: 1.848131 Validation Loss: 4.073692 Epoch: 67 Training Loss: 1.772752 Validation Loss: 4.272849 Epoch: 68 Training Loss: 1.657077 Validation Loss: 4.436198 Epoch: 69 Training Loss: 1.639359 Validation Loss: 4.272867 Epoch: 70 Training Loss: 1.529748 Validation Loss: 4.195291 Epoch: 71 Training Loss: 1.459628 Validation Loss: 4.642825 Epoch: 72 Training Loss: 1.398450 Validation Loss: 4.470085 Epoch: 73 Training Loss: 1.307647 Validation Loss: 4.226277 Epoch: 74 Training Loss: 1.222971 Validation Loss: 4.572246 Epoch: 75 Training Loss: 1.197259 Validation Loss: 4.646921 Epoch: 76 Training Loss: 1.146386 Validation Loss: 4.582765 Epoch: 77 Training Loss: 1.030041 Validation Loss: 4.751989 Epoch: 78 Training Loss: 1.019715 Validation Loss: 4.816722 Epoch: 79 Training Loss: 0.936239 Validation Loss: 4.613564 Epoch: 80 Training Loss: 0.890672 Validation Loss: 4.938927 Epoch: 81 Training Loss: 0.915127 Validation Loss: 4.892101 Epoch: 82 Training Loss: 0.847747 Validation Loss: 4.957171 Epoch: 83 Training Loss: 0.796957 Validation Loss: 5.154359 Epoch: 84 Training Loss: 0.765211 Validation Loss: 5.068325 Epoch: 85 Training Loss: 0.731599 Validation Loss: 5.171645 Epoch: 86 Training Loss: 0.696405 Validation Loss: 5.118921 Epoch: 87 Training Loss: 0.665685 Validation Loss: 5.239987 Epoch: 88 Training Loss: 0.606736 Validation Loss: 5.043240 Epoch: 89 Training Loss: 0.637375 Validation Loss: 5.094911 Epoch: 90 Training Loss: 0.588122 Validation Loss: 5.530138 Epoch: 91 Training Loss: 0.581235 Validation Loss: 5.304848 Epoch: 92 Training Loss: 0.522850 Validation Loss: 5.498141 Epoch: 93 Training Loss: 0.531247 Validation Loss: 5.303794 Epoch: 94 Training Loss: 0.487821 Validation Loss: 5.548544 Epoch: 95 Training Loss: 0.488928 Validation Loss: 5.361702 Epoch: 96 Training Loss: 0.452610 Validation Loss: 5.372768 Epoch: 97 Training Loss: 0.416665 Validation Loss: 5.673854 Epoch: 98 Training Loss: 0.412517 Validation Loss: 5.957154 Epoch: 99 Training Loss: 0.416761 Validation Loss: 5.583009 Epoch: 100 Training Loss: 0.424272 Validation Loss: 5.792906
<All keys matched successfully>
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
def test(loaders, model, criterion, use_cuda):
    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.
    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)
    print('Test Loss: {:.6f}\n'.format(test_loss))
    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))
# call test function
test(loader_scratch, model_scratch, criterion_scratch, use_cuda)
Test Loss: 3.935294
Test Accuracy: 15% (129/836)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).
If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.
## TODO: Specify data loaders
import os
import torch
from torchvision import datasets
import torchvision.transforms as transforms
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
transform_train = transforms.Compose([transforms.Resize(224),
                                      transforms.CenterCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.RandomRotation(20),
                                      transforms.ToTensor(),
                                      transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                           std=[0.229, 0.224, 0.225])])
transform = transforms.Compose([transforms.Resize(224),
                                transforms.CenterCrop(224),
                                transforms.ToTensor(),
                                transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                     std=[0.229, 0.224, 0.225])])
dataset = {}
loaders_transfer = {}
dataset['train'] = datasets.ImageFolder(os.path.join('dogImages/train'), transform_train)
dataset['test'] = datasets.ImageFolder(os.path.join('dogImages/test'), transform)
dataset['valid'] = datasets.ImageFolder(os.path.join('dogImages/valid'), transform)
loaders_transfer['train'] = torch.utils.data.DataLoader(dataset['train'], batch_size=15, shuffle=True, num_workers=0)
loaders_transfer['test'] = torch.utils.data.DataLoader(dataset['test'], batch_size=10)
loaders_transfer['valid'] = torch.utils.data.DataLoader(dataset['valid'], batch_size=10)
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
model_transfer = models.vgg16_bn(pretrained=True)
# freeze all pre-trained parameters
for param in model_transfer.parameters():
    param.requires_grad = False
# replace the final classifier layer with one sized for the 133 dog breeds
model_transfer.classifier[6] = nn.Linear(4096, 133, bias=True)
# the new layer's parameters default to requires_grad=True; make that explicit
for param in model_transfer.classifier[6].parameters():
    param.requires_grad = True
if use_cuda:
    model_transfer = model_transfer.cuda()
Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
Answer: I decided on VGG-16 because it is a good architecture for images, with parameters pre-trained on ImageNet, which already contains pictures of dogs.
I froze all of the parameters except the final classifier layer, which I replaced and trained myself to see the results.
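A quick way to verify that only the replaced layer will be trained (a small check, not part of the original submission) is to list the parameters that still require gradients:
trainable = [name for name, p in model_transfer.named_parameters() if p.requires_grad]
print(trainable)  # expected: ['classifier.6.weight', 'classifier.6.bias']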
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.
import torch.optim as optim
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.parameters(), lr=0.01)
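Since every other parameter is frozen, the optimizer above simply skips them (their gradients stay None). An equivalent, slightly more explicit alternative (a sketch, not what was run here) passes only the trainable parameters:
# optional: optimize only the replaced classifier layer
optimizer_transfer = optim.SGD(model_transfer.classifier[6].parameters(), lr=0.01)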
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.
# train the model
model_transfer = train(100, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
Epoch: 1 Training Loss: 2.612879 Validation Loss: 1.181017 Validation loss decreased (inf --> 1.181017). Saving model ... Epoch: 2 Training Loss: 1.233017 Validation Loss: 0.800645 Validation loss decreased (1.181017 --> 0.800645). Saving model ... Epoch: 3 Training Loss: 0.936194 Validation Loss: 0.674867 Validation loss decreased (0.800645 --> 0.674867). Saving model ... Epoch: 4 Training Loss: 0.824795 Validation Loss: 0.594347 Validation loss decreased (0.674867 --> 0.594347). Saving model ... Epoch: 5 Training Loss: 0.727943 Validation Loss: 0.564899 Validation loss decreased (0.594347 --> 0.564899). Saving model ... Epoch: 6 Training Loss: 0.685034 Validation Loss: 0.534739 Validation loss decreased (0.564899 --> 0.534739). Saving model ... Epoch: 7 Training Loss: 0.656422 Validation Loss: 0.541734 Epoch: 8 Training Loss: 0.614956 Validation Loss: 0.519602 Validation loss decreased (0.534739 --> 0.519602). Saving model ... Epoch: 9 Training Loss: 0.582113 Validation Loss: 0.507413 Validation loss decreased (0.519602 --> 0.507413). Saving model ... Epoch: 10 Training Loss: 0.577854 Validation Loss: 0.482938 Validation loss decreased (0.507413 --> 0.482938). Saving model ... Epoch: 11 Training Loss: 0.562529 Validation Loss: 0.467366 Validation loss decreased (0.482938 --> 0.467366). Saving model ... Epoch: 12 Training Loss: 0.547566 Validation Loss: 0.462992 Validation loss decreased (0.467366 --> 0.462992). Saving model ... Epoch: 13 Training Loss: 0.519890 Validation Loss: 0.455672 Validation loss decreased (0.462992 --> 0.455672). Saving model ... Epoch: 14 Training Loss: 0.498376 Validation Loss: 0.490099 Epoch: 15 Training Loss: 0.500148 Validation Loss: 0.454600 Validation loss decreased (0.455672 --> 0.454600). Saving model ... Epoch: 16 Training Loss: 0.507216 Validation Loss: 0.457928 Epoch: 17 Training Loss: 0.495924 Validation Loss: 0.457241 Epoch: 18 Training Loss: 0.469488 Validation Loss: 0.466734 Epoch: 19 Training Loss: 0.474311 Validation Loss: 0.465533 Epoch: 20 Training Loss: 0.467176 Validation Loss: 0.443478 Validation loss decreased (0.454600 --> 0.443478). Saving model ... Epoch: 21 Training Loss: 0.446043 Validation Loss: 0.430522 Validation loss decreased (0.443478 --> 0.430522). Saving model ... Epoch: 22 Training Loss: 0.448426 Validation Loss: 0.444070 Epoch: 23 Training Loss: 0.434091 Validation Loss: 0.418937 Validation loss decreased (0.430522 --> 0.418937). Saving model ... Epoch: 24 Training Loss: 0.444278 Validation Loss: 0.442802 Epoch: 25 Training Loss: 0.412774 Validation Loss: 0.439757 Epoch: 26 Training Loss: 0.440968 Validation Loss: 0.437896 Epoch: 27 Training Loss: 0.421756 Validation Loss: 0.425798 Epoch: 28 Training Loss: 0.415110 Validation Loss: 0.425328 Epoch: 29 Training Loss: 0.413895 Validation Loss: 0.430368 Epoch: 30 Training Loss: 0.400309 Validation Loss: 0.415237 Validation loss decreased (0.418937 --> 0.415237). Saving model ... Epoch: 31 Training Loss: 0.406444 Validation Loss: 0.438446 Epoch: 32 Training Loss: 0.397863 Validation Loss: 0.421616 Epoch: 33 Training Loss: 0.406506 Validation Loss: 0.421152 Epoch: 34 Training Loss: 0.391738 Validation Loss: 0.424920 Epoch: 35 Training Loss: 0.367713 Validation Loss: 0.415680 Epoch: 36 Training Loss: 0.385051 Validation Loss: 0.417934 Epoch: 37 Training Loss: 0.375831 Validation Loss: 0.417978 Epoch: 38 Training Loss: 0.375965 Validation Loss: 0.434339 Epoch: 39 Training Loss: 0.368646 Validation Loss: 0.400417 Validation loss decreased (0.415237 --> 0.400417). Saving model ... 
Epoch: 40 Training Loss: 0.376518 Validation Loss: 0.405187 Epoch: 41 Training Loss: 0.371183 Validation Loss: 0.425622 Epoch: 42 Training Loss: 0.348367 Validation Loss: 0.426709 Epoch: 43 Training Loss: 0.367837 Validation Loss: 0.437932 Epoch: 44 Training Loss: 0.349732 Validation Loss: 0.403636 Epoch: 45 Training Loss: 0.351014 Validation Loss: 0.428028 Epoch: 46 Training Loss: 0.352534 Validation Loss: 0.428419 Epoch: 47 Training Loss: 0.361653 Validation Loss: 0.408701 Epoch: 48 Training Loss: 0.350573 Validation Loss: 0.406849 Epoch: 49 Training Loss: 0.347704 Validation Loss: 0.412908 Epoch: 50 Training Loss: 0.339198 Validation Loss: 0.411927 Epoch: 51 Training Loss: 0.336375 Validation Loss: 0.417306 Epoch: 52 Training Loss: 0.338202 Validation Loss: 0.421846 Epoch: 53 Training Loss: 0.351281 Validation Loss: 0.415302 Epoch: 54 Training Loss: 0.339228 Validation Loss: 0.412877 Epoch: 55 Training Loss: 0.334407 Validation Loss: 0.434391 Epoch: 56 Training Loss: 0.330468 Validation Loss: 0.409621 Epoch: 57 Training Loss: 0.323064 Validation Loss: 0.410408 Epoch: 58 Training Loss: 0.329464 Validation Loss: 0.425064 Epoch: 59 Training Loss: 0.336731 Validation Loss: 0.416017 Epoch: 60 Training Loss: 0.327516 Validation Loss: 0.430951 Epoch: 61 Training Loss: 0.313994 Validation Loss: 0.410825 Epoch: 62 Training Loss: 0.316981 Validation Loss: 0.406483 Epoch: 63 Training Loss: 0.328437 Validation Loss: 0.418080 Epoch: 64 Training Loss: 0.319917 Validation Loss: 0.386174 Validation loss decreased (0.400417 --> 0.386174). Saving model ... Epoch: 65 Training Loss: 0.317090 Validation Loss: 0.394199 Epoch: 66 Training Loss: 0.311354 Validation Loss: 0.413747 Epoch: 67 Training Loss: 0.318958 Validation Loss: 0.411925 Epoch: 68 Training Loss: 0.316144 Validation Loss: 0.423007 Epoch: 69 Training Loss: 0.310914 Validation Loss: 0.416942 Epoch: 70 Training Loss: 0.316879 Validation Loss: 0.424328 Epoch: 71 Training Loss: 0.320683 Validation Loss: 0.413274 Epoch: 72 Training Loss: 0.302789 Validation Loss: 0.413938 Epoch: 73 Training Loss: 0.311104 Validation Loss: 0.426094 Epoch: 74 Training Loss: 0.304994 Validation Loss: 0.393382 Epoch: 75 Training Loss: 0.307655 Validation Loss: 0.429122 Epoch: 76 Training Loss: 0.306521 Validation Loss: 0.421371 Epoch: 77 Training Loss: 0.296730 Validation Loss: 0.398145 Epoch: 78 Training Loss: 0.287782 Validation Loss: 0.415668 Epoch: 79 Training Loss: 0.287106 Validation Loss: 0.402193 Epoch: 80 Training Loss: 0.313072 Validation Loss: 0.396561 Epoch: 81 Training Loss: 0.292493 Validation Loss: 0.387197 Epoch: 82 Training Loss: 0.293730 Validation Loss: 0.431003 Epoch: 83 Training Loss: 0.286474 Validation Loss: 0.400926 Epoch: 84 Training Loss: 0.301340 Validation Loss: 0.392804 Epoch: 85 Training Loss: 0.288828 Validation Loss: 0.412119 Epoch: 86 Training Loss: 0.295847 Validation Loss: 0.399132 Epoch: 87 Training Loss: 0.289963 Validation Loss: 0.400423 Epoch: 88 Training Loss: 0.282921 Validation Loss: 0.421267 Epoch: 89 Training Loss: 0.284123 Validation Loss: 0.403326 Epoch: 90 Training Loss: 0.282097 Validation Loss: 0.411788 Epoch: 91 Training Loss: 0.285459 Validation Loss: 0.424447 Epoch: 92 Training Loss: 0.287915 Validation Loss: 0.407220 Epoch: 93 Training Loss: 0.278051 Validation Loss: 0.400814 Epoch: 94 Training Loss: 0.291190 Validation Loss: 0.424086 Epoch: 95 Training Loss: 0.271764 Validation Loss: 0.429150 Epoch: 96 Training Loss: 0.284526 Validation Loss: 0.394954 Epoch: 97 Training Loss: 0.278233 Validation Loss: 0.421139 Epoch: 
98 Training Loss: 0.276778 Validation Loss: 0.430977 Epoch: 99 Training Loss: 0.268164 Validation Loss: 0.417392 Epoch: 100 Training Loss: 0.273684 Validation Loss: 0.417860
<All keys matched successfully>
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
Test Loss: 0.478605
Test Accuracy: 85% (716/836)
Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in dataset['train'].classes]
def predict_breed_transfer(img_path, model):
    # load the image and return the predicted breed
    img = Image.open(img_path).convert('RGB')
    tensor = transform(img)       # the deterministic transform defined above
    tensor = tensor.unsqueeze(0)  # add a batch dimension
    if use_cuda:
        tensor = tensor.cuda()
        model = model.cuda()
    model.eval()
    output = model(tensor)  # use the model passed in, not the global
    dog_class = torch.argmax(output)
    plt.axis('off')
    plt.imshow(img)
    plt.show()
    return class_names[dog_class.item()]
predict_breed_transfer('WhatsApp Image 2021-06-18 at 21.49.18.jpeg', model_transfer)
'Parson russell terrier'
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a dog is detected in the image, return the predicted breed.
- if a human is detected in the image, return the resembling dog breed.
- if neither is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
    ## handle cases for a human face, dog, and neither
    if face_detector(img_path):
        print('This human looks like a {}'.format(predict_breed_transfer(img_path, model_transfer)))
    elif dog_detector(img_path):
        print('This dog is a {}'.format(predict_breed_transfer(img_path, model_transfer)))
    else:
        print('This picture has no dog or human')
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
Test your algorithm on at least six images from your computer. Feel free to use any images you like. Use at least two human and two dog images.
Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
Answer: (Three possible points for improvement)
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
run_app('WhatsApp Image 2021-06-18 at 21.49.18.jpeg')
run_app('WhatsApp Image 2021-06-18 at 18.47.01.jpeg')
run_app('csm_pabllo-vittar_e55e59a27c.jpg')
run_app('edab9ae0a2884159d7e17d5814076fc4b51f761c.jpg')
run_app('6g7ge66rpag51.jpg')
run_app('164742v800_puppy-1.jpg')
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
This dog is a Parson russell terrier
This dog is a Chihuahua
This human looks like a Afghan hound
This dog is a Akita
This picture has no dog or human
This dog is a American staffordshire terrier
This human looks like a American water spaniel
This human looks like a American water spaniel
This human looks like a Silky terrier
This dog is a Tibetan mastiff
This dog is a Tibetan mastiff
This dog is a Tibetan mastiff